Neural Processing Unit

A neural processing unit (NPU), also known as an AI accelerator or deep learning processor, is a class of specialized hardware accelerator or computer system designed to accelerate artificial intelligence (AI) and machine learning applications, including artificial neural networks and computer vision.


Use
Their purpose is either to efficiently execute already trained AI models (inference) or to train AI models. Their applications include algorithms for robotics, the Internet of things, and other data-intensive or sensor-driven tasks; Google, for example, handles such workloads using its own AI accelerators. NPUs are often manycore or spatial designs and focus on low-precision arithmetic, novel dataflow architectures, or in-memory computing capability. A widely used datacenter-grade AI integrated circuit chip, the H100 GPU, contains tens of billions of MOSFETs.


Consumer devices
AI accelerators are used in mobile devices such as Apple iPhones, Huawei and Google Pixel smartphones, and in AMD AI Engines in Versal devices and NPUs, and they appear in many Apple silicon, Qualcomm, Samsung, and Google Tensor smartphone processors.

More recently (circa 2022), NPUs have been added to computer processors from Intel, AMD, and Apple silicon. All models of Intel Meteor Lake processors have a built-in versatile processor unit (VPU) for accelerating inference for computer vision and deep learning.

On consumer devices, the NPU is intended to be small and power-efficient, yet reasonably fast when running small models. To do this, NPUs are designed to support low-bitwidth operations using data types such as INT4, INT8, FP8, and FP16. A common performance metric is trillions of operations per second (TOPS), though this metric alone does not specify which kind of operations are being counted.
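
For illustration, the following NumPy sketch shows one common way such low-bitwidth formats are produced: symmetric per-tensor INT8 quantization of FP32 weights. The scaling scheme and function names are simplified assumptions for illustration, not any particular vendor's toolchain.

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric per-tensor INT8 quantization: w ≈ scale * q with q in [-127, 127].
    Real NPU toolchains typically use per-channel scales and calibration data."""
    scale = max(np.max(np.abs(weights)) / 127.0, 1e-12)   # avoid division by zero
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize_int8(q, scale):
    return q.astype(np.float32) * scale

w = np.random.randn(64, 64).astype(np.float32)
q, s = quantize_int8(w)
err = np.max(np.abs(w - dequantize_int8(q, s)))
print(f"worst-case rounding error ≈ {err:.5f} (about scale/2 = {s / 2:.5f})")
```

The INT8 matrices are what the NPU actually multiplies; the per-tensor scale is applied once to the result, which is why the low-precision hardware can stay simple.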


Datacenters
[Image: Google Tensor Processing Unit (TPU) v4 package (ASIC in center plus 4 HBM stacks) and printed circuit board (PCB) with 4 liquid-cooled packages; the board's front panel has 4 top-side PCIe connectors (2023).]

Accelerators are used in servers: e.g., tensor processing units (TPU) for Google Cloud Platform, and Trainium and Inferentia chips for Amazon Web Services. Many vendor-specific terms exist for devices in this category, and it is an emerging technology without a dominant design.

Since the late 2010s, graphics processing units designed by companies such as Nvidia and AMD often include AI-specific hardware in the form of dedicated functional units for low-precision matrix-multiplication operations. These GPUs are commonly used as AI accelerators, both for training and inference.


Scientific computation
Although NPUs are tailored for low-precision (e.g. FP16, INT8) matrix-multiplication operations, they can be used to emulate higher-precision matrix multiplications in scientific computing. Because modern GPUs devote much of their design effort to making the NPU (tensor-core) portion fast, emulated FP64 (the Ozaki scheme) running on these units can outperform native FP64: this has been demonstrated using FP16-emulated FP64 on the NVIDIA TITAN RTX, and using INT8-emulated FP64 on NVIDIA consumer GPUs and the A100 GPU. (Consumer GPUs benefit especially from this scheme, since they have very little native FP64 hardware, and show a 6× speedup.) Since CUDA Toolkit 13.0 Update 2, cuBLAS automatically uses INT8-emulated FP64 matrix multiplication at equivalent precision when it is faster than the native path; this is in addition to the FP16-emulated FP32 feature introduced in version 12.9.
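
To make the emulation idea concrete, here is a minimal NumPy sketch in the spirit of the Ozaki splitting approach: each FP64 matrix is sliced into INT8 "digits", the slices are multiplied with exact integer accumulation (the step an NPU or tensor core would perform), and the partial products are recombined with the appropriate scales. The per-tensor slicing, slice count, and function names are simplified assumptions for illustration, not the production algorithm used in cuBLAS.

```python
import numpy as np

def split_int8(M, num_slices=4, bits=7):
    """Split a float64 matrix into INT8 'digit' slices so that
    M ≈ scale * sum_k slices[k] * 2**(-bits*(k+1)).
    Per-tensor scaling for simplicity; the real Ozaki scheme uses
    error-free, per-row/column splittings."""
    scale = np.max(np.abs(M)) or 1.0
    R = M / scale                                   # normalized residual, |R| <= 1
    slices = []
    for _ in range(num_slices):
        digit = np.clip(np.round(R * (1 << bits)), -127, 127)
        slices.append(digit.astype(np.int8))
        R = (R - digit / (1 << bits)) * (1 << bits)  # expose the next `bits` bits
    return scale, slices

def emulated_matmul(A, B, num_slices=4, bits=7):
    """FP64 matmul emulated with INT8 x INT8 -> INT32 slice products."""
    sA, As = split_int8(A, num_slices, bits)
    sB, Bs = split_int8(B, num_slices, bits)
    C = np.zeros((A.shape[0], B.shape[1]))
    for i, Ai in enumerate(As):
        for j, Bj in enumerate(Bs):
            # The integer product is exact as long as the accumulator does not
            # overflow (fine for these small demo sizes).
            P = Ai.astype(np.int32) @ Bj.astype(np.int32)
            C += P * 2.0 ** (-bits * (i + j + 2))
    return sA * sB * C

rng = np.random.default_rng(0)
A, B = rng.standard_normal((64, 64)), rng.standard_normal((64, 64))
print(np.max(np.abs(emulated_matmul(A, B) - A @ B)))  # small but nonzero error
```

Using more slices (or more bits per slice) trades additional low-precision multiplications for higher accuracy, which is the knob emulated-FP64 implementations tune to match native FP64 precision.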


Programming
An operating system or a higher-level library may provide application programming interfaces such as TensorFlow Lite with LiteRT Next (Android) or CoreML (iOS, macOS). Formats such as ONNX are used to represent trained neural networks.
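
As a sketch of how such an exchange format is used in practice, the following example exports a small PyTorch model to ONNX and runs the resulting file with ONNX Runtime; the toy model, tensor names, and file name are illustrative assumptions.

```python
import numpy as np
import torch
import torch.nn as nn
import onnxruntime as ort  # assumes the onnxruntime package is installed

# A tiny stand-in network; any torch.nn.Module would do.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4)).eval()
example = torch.randn(1, 16)

# Export the trained network to the framework-neutral ONNX format.
torch.onnx.export(model, example, "tiny_model.onnx",
                  input_names=["input"], output_names=["logits"])

# Any ONNX-capable runtime (possibly backed by an NPU execution provider)
# can now load and run the same file.
session = ort.InferenceSession("tiny_model.onnx")
out = session.run(None, {"input": np.random.rand(1, 16).astype(np.float32)})
print(out[0].shape)  # (1, 4)
```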

Consumer CPU-integrated NPUs are accessible through vendor-specific APIs. AMD (Ryzen AI), Intel (OpenVINO), Apple silicon (CoreML), and Qualcomm (SNPE) each have their own APIs, which can be built upon by a higher-level library.
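
For example, an OpenVINO-based inference call targeting an Intel NPU might look like the sketch below; the IR file name, input shape, and availability of the "NPU" device are assumptions that depend on the installed OpenVINO release and NPU driver.

```python
import numpy as np
import openvino as ov  # OpenVINO 2023.x+ Python API (assumed installed)

core = ov.Core()
print(core.available_devices)           # e.g. ['CPU', 'GPU', 'NPU'] when an NPU driver is present

model = core.read_model("model.xml")    # hypothetical OpenVINO IR file
compiled = core.compile_model(model, device_name="NPU")

request = compiled.create_infer_request()
x = np.random.rand(1, 3, 224, 224).astype(np.float32)   # assumed input shape
result = request.infer({0: x})          # results keyed by output port/index
```

The other vendors' stacks follow the same pattern: load a converted model, compile or bind it to the NPU device, then submit input tensors through the vendor runtime.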

GPUs generally use existing compute pipelines such as CUDA and OpenCL, adapted for lower precisions and specialized matrix-multiplication operations; other compute APIs are also being used. Custom-built systems such as the Google TPU use private interfaces.

A large number of separate underlying acceleration APIs, compilers, and runtimes are in use in the AI field, greatly increasing software development effort because of the many combinations involved. As of 2025, the Khronos Group open standards organization is pursuing standardization of AI-related interfaces to reduce the amount of work needed. Khronos is working on three separate fronts: expanding the data types and intrinsic operations in OpenCL and Vulkan, adding compute graphs to its APIs, and defining a SkriptND file format for describing a neural network.

